-
What's the Deal with Big Data?
-
Posted on Feb 11, 2013 10:11:23 AM | Adrian Gardner
-
Big Data affects nearly all of us at NASA, and it is exploding: the average annual growth rate is 60%, and by the end of 2012 the digital universe is estimated to reach 2.7 zettabytes. Goddard manages enormous volumes of data
related to building and maintaining satellites, analytical simulations, and support
functions.
So what is Big Data? In the IT industry, Big Data is
defined by four V’s: volume, velocity, variety, and veracity. Volume is the sheer amount of data. Velocity is the speed with which new
data is created and existing data modified. Variety
is the management of various data formats and types. Veracity is the trustworthiness of the data, a concern of many business leaders; it is estimated that one in three CEOs don’t trust the information they use. I’ll also add a
fifth V, for Value, or the considerable
usefulness of Big Data. It
allows us to see data patterns and anomalies and shifts
our decision-making from being reactive to proactive. So how is NASA managing the many challenges
of Big Data? The NASA Open Government Plan outlines many of our
approaches, such as managing and processing, archiving and distribution, and sharing data.
On managing and processing, here’s
an example. The Mission
Data Processing and Control System (MPCS) was recently used to support the Curiosity rover on Mars. MPCS interfaces with NASA’s Deep Space Network, and in turn the Mars Reconnaissance Orbiter, to relay data to and from Curiosity and to process the raw data in real time, a task that previously took hours, if not days, to accomplish.
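To make the shift from batch processing to real-time processing concrete, here is a minimal sketch, in Python, of the general pattern: telemetry frames are decoded and published the moment they arrive rather than being queued up for a later batch run. This is purely illustrative; the frame format, field names, and threading model below are invented and are not how MPCS itself is implemented.

```python
# Hypothetical sketch of real-time telemetry processing; not the actual MPCS code.
# Frames are decoded and published as soon as they arrive, instead of being
# written to disk and processed hours later in a batch job.
import queue
import struct
import threading
import time

frames = queue.Queue()  # stands in for the downlink relay stream


def decode(frame: bytes) -> dict:
    """Unpack an invented 12-byte frame: timestamp (float64), channel id, value."""
    t, channel, value = struct.unpack(">dHh", frame)
    return {"time": t, "channel": channel, "value": value}


def process_stream() -> None:
    while True:
        frame = frames.get()
        if frame is None:  # sentinel marking the end of the pass
            break
        record = decode(frame)
        print(f"ch{record['channel']:03d} @ {record['time']:.3f}: {record['value']}")


worker = threading.Thread(target=process_stream)
worker.start()

# Simulate a few frames arriving over the relay.
for i in range(3):
    frames.put(struct.pack(">dHh", time.time(), i, 100 + i))
frames.put(None)
worker.join()
```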
For archiving and distribution, consider the Atmospheric Science Data Center (ASDC) at Langley, which processes, archives, and distributes Earth science data, and the Planetary Data System (PDS), which holds a vast body of planetary science data. PDS offers access
to over 100 TB of space images, telemetry, models, etc. associated with
planetary missions from the past 30 years.
NASA is a leader in sharing Big Data. The
Earth Observing System Data and Information System (EOSDIS) manages and shares
Earth science data from various sources – satellites, aircraft, field
measurements, etc. The EOSDIS science operations are performed within 12
interconnected Distributed Active Archive Centers (DAACs), each with specific
responsibilities for producing, archiving, and distributing Earth science data
products.
To enhance our ability to manage Big
Data, I believe that the IT industry should adopt the Predictive Model Markup Language (PMML), an XML-based,
vendor-agnostic markup
language that provides a standard way to share predictive analytics models. With PMML, proprietary formats and incompatibilities are no longer a barrier to the exchange of data and models between applications.
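As a small illustration of why an XML-based standard helps, here is a sketch that reads the data dictionary out of a PMML file using only Python’s standard library. The file name and its contents are hypothetical, and the namespace URI must match the PMML version your tool produces; the element and attribute names (DataDictionary, DataField, name, optype, dataType) come from the PMML standard.

```python
# Minimal sketch: list the data fields declared in a PMML model file using only
# the Python standard library. The file name and its contents are hypothetical;
# the namespace URI must match the PMML version the producing tool emits.
import xml.etree.ElementTree as ET

NS = {"pmml": "http://www.dmg.org/PMML-4_2"}  # PMML 4.2 namespace

tree = ET.parse("model.pmml")  # hypothetical file exported by an analytics tool
for field in tree.getroot().findall(".//pmml:DataDictionary/pmml:DataField", NS):
    print(field.get("name"), field.get("optype"), field.get("dataType"))
```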
One real-world example of how NASA leverages its expertise in Big Data, and one that directly affects your life, is in the
field of airline safety. NASA analyzes data from planes to study safety
implications, which in turn helps to improve the maintenance procedures of
commercial airlines and potentially prevent equipment failures. Using advanced
algorithms, the agency helps sift through mountains of unstructured data to
find key information that helps predict and prevent safety problems.
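As a toy illustration of the underlying idea, the sketch below flags sensor readings that deviate sharply from a baseline. NASA’s actual aviation-safety data mining is far more sophisticated and works on unstructured as well as numeric data; the readings and threshold here are invented.

```python
# Toy illustration of the general idea: flag sensor readings that deviate sharply
# from a fleet-wide baseline. NASA's actual aviation-safety analysis is far more
# sophisticated; the numbers below are invented.
from statistics import mean, stdev


def anomalies(readings, threshold=3.0):
    """Return (index, value) pairs more than `threshold` std devs from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    return [(i, x) for i, x in enumerate(readings) if abs(x - mu) > threshold * sigma]


# Invented exhaust-gas temperatures across many flights, with one suspect value.
egt = [612, 608, 615, 610, 607, 613, 611, 609, 745, 612, 610, 614]
print(anomalies(egt))  # -> [(8, 745)]
```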
-
Meeting the Needs of the Goddard Customer
-
Posted on Oct 23, 2012 03:15:42 PM | Adrian Gardner
-
BYOD, or “bring your own device,” is an increasingly common request from Goddard employees and contractors. Simply put, BYOD enables staff to use their own
computers, smartphones, tablets and other technologies at work. There are many benefits – allowing users to
choose devices they are comfortable with improves job satisfaction. BYOD makes telecommuting more
feasible and reduces duplicative
devices. Goddard benefits from BYOD too – we save money on hardware, software
and device maintenance. And staff tend to upgrade to the latest hardware and software more quickly than Goddard does.
But there are a number of challenges with BYOD. Smartphones and tablets are susceptible
to worms, viruses, Trojans and spyware just like desktops. Eavesdropping is an
issue since carrier-based wireless networks lack end-to-end security. Theft of
devices can result in a loss of sensitive NASA data. Finally, users may be
concerned that Goddard has access to sensitive personal data.
Virtualization helps us to overcome these challenges. Virtual
desktops and applications are delivered to end users on any device. Little, if
any, data is actually stored on the device; instead, data is requested and
displayed as needed, reducing the risk of data loss. We conducted a Virtual
Desktop Infrastructure (VDI) pilot this summer and will initiate a
proof-of-concept study by allowing Goddard employees and visiting scientists to
connect to NASA data using VDI and their own devices.
We are developing policies for allowing personal devices to
connect to our network. These policies will cover who gets a mobile device, who
pays for it, what constitutes acceptable use, user responsibilities, and the
range of devices ITCD will support. Our Mobile Device Management (MDM) system will
monitor, manage and support these personal devices. It will provide central
remote management of devices including the distribution of applications, data
and configuration settings, and remote wipes. And, it will position us to better meet
the needs of our customers.
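As a rough illustration of what “monitor and manage” means in practice, the sketch below shows the kind of compliance check an MDM system might apply before a personal device is granted access. The policy fields and thresholds are invented for illustration and are not Goddard’s actual policy.

```python
# Toy illustration of the kind of compliance check an MDM system applies before a
# personal device is allowed on the network. The policy fields and thresholds are
# invented for illustration and are not Goddard's actual policy.
MIN_OS_VERSION = (14, 0)  # hypothetical minimum supported OS release


def is_compliant(device: dict) -> bool:
    """A device must be encrypted, screen-locked, and running a supported OS."""
    return (
        device["encrypted"]
        and device["screen_lock"]
        and tuple(device["os_version"]) >= MIN_OS_VERSION
    )


phone = {"encrypted": True, "screen_lock": True, "os_version": (15, 2)}
tablet = {"encrypted": False, "screen_lock": True, "os_version": (13, 1)}
print(is_compliant(phone), is_compliant(tablet))  # -> True False
```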
-
A Reduction In Resources Can Mean Outsourcing for Federal Agencies
-
Posted on Aug 30, 2012 11:24:11 AM | Adrian Gardner
-
Last week, I shared an article, Federal IT Faces Budget Pressure, with Goddard’s IT leadership team.
This article predicts that federal
agencies will have to move towards vendor-managed cloud computing solutions
within the next 5 years to accommodate future budget cuts. We
recently received guidance that 10% cuts would be made to all federal IT
budgets within the next fiscal year, so the author’s prediction of this cost-saving
initiative comes as no surprise. During
the discussion, I advised our team that outsourcing initiatives have become a
harsh reality of the way we do business today, and that we need to continue to be proactive and adapt.
An increased reliance on vendors for the management and
maintenance of some of our services is not outside the norm for Goddard Space Flight Center or NASA. For example, the amazing work underway at our Wallops Flight Facility and the public-private partnership that has been established with Orbital Sciences Corporation is a visible example of our future business model. Orbital’s Commercial Orbital Transportation
Services (COTS) operational system consists of an International Space Station
(ISS) visiting vehicle, a new privately-developed medium-class launch vehicle,
and all necessary mission planning and operations facilities and services.
Orbital is developing and qualifying a new launch vehicle
(called Antares) to enable lower-cost COTS launches as well as future NASA
science and exploration, commercial and national security space missions.
Antares will combine Orbital’s industry-leading experience in developing,
building and operating small launch vehicles with Wallops Flight Facility’s industry-leading range operations to provide services in a more cost-effective
fashion.
Our IT programs, organizations, and scientific missions will
need to use this same strategic approach with Goddard’s infrastructure
capabilities, namely containerized computing, given our fiscal constraints. Containerized computing offers a fee-for-service model that can be monitored, measured, and tailored to the scientific demands we have at each Center. A reduction in our IT funding will inevitably
decrease our ability to procure appropriate resources for cloud/containerized
computing services; therefore, outsourcing will potentially be both a short and
long-term solution. In the short term, we would be able to “do more with less”; in the long term, we would be able to facilitate partnerships to ensure and sustain security compliance, infrastructure compatibility, and the like.
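To make the fee-for-service idea concrete, here is a toy metering example: usage is measured per resource and billed at a unit rate, so each project pays only for what it consumes. The rates and usage figures are invented for illustration.

```python
# Toy illustration of a fee-for-service model: usage is metered and billed per
# resource consumed, so each project pays only for what it actually uses.
# The unit rates and usage figures below are invented for illustration.
RATES = {"cpu_hours": 0.05, "storage_gb_months": 0.10, "egress_gb": 0.02}  # dollars per unit


def monthly_charge(usage: dict) -> float:
    """Sum each metered quantity multiplied by its unit rate."""
    return sum(RATES[item] * amount for item, amount in usage.items())


project_usage = {"cpu_hours": 12_000, "storage_gb_months": 4_000, "egress_gb": 500}
print(f"${monthly_charge(project_usage):,.2f}")  # -> $1,010.00
```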
Overall, vendor-managed cloud computing will allow us to re-prioritize
and reallocate our existing resources to other IT service areas. We will then be able to focus our talents on improving operating efficiency and innovation across the Center. It may be a shift in the way that the federal
government has done business previously, but for NASA, it’s business as usual.
-
The Buzz Around IT Governance
-
Posted on Dec 08, 2011 01:12:10 PM | Adrian Gardner
-
Information Technology (IT) Governance seems to be the hot
new buzzword across NASA and the
CIO community. On September 29, 2011, the NASA Mission Support Council approved a governance model for improving the management of NASA IT. The Agency spends approximately $1.4B annually on IT. Unfortunately, today’s governance structure makes it increasingly difficult to manage IT spending at the Agency and Center levels.
The meaning of IT governance varies depending on who’s having the discussion. One could simply state that IT
Governance is a process that puts structure and discipline around how
organizations align IT strategy with business strategy, ensuring that their functional elements stay on track to achieve their strategies and goals, and establishing sound ways to measure IT’s performance. It makes sure that all stakeholders’
interests are taken into account and that processes provide measurable results.
I believe that every organization—large
and small, public and private—needs a way to ensure that the IT function
sustains the organization’s mission, strategies, and objectives. The level of
sophistication that we must apply to IT governance, however, may vary according
to size, culture, industry or applicable regulations. A general principal is
that the larger and more regulated the organization, the more detailed the IT
governance structure should be.
IT governance aligns the Agency’s and the Center’s IT strategies with their missions and business strategies and ensures that our Center will stay on
track to achieve programmatic and mission goals, in a manner that produces an IT architecture that is
efficient, effective, scalable,
and measurable. We must also take into account the interests of stakeholders (e.g., Congress, public sector partners, industry, and academia) and establish processes to ensure measurable results.
Essentially, an IT governance framework should answer these fundamental
questions:
- How closely aligned is IT to the strategic
direction of the Agency or Center?
- How are IT requirements identified,
validated, and funded?
- How is IT managed overall?
- What are the specific metrics required
to effectively manage the Center’s IT resources?
- What is the return on investment?
- What are the risks and how are they being
managed?
As detailed in the GSFC IT Strategic Plan, the IT governance
model proposed for GSFC supports and aligns with the Agency IT governance processes
by providing the framework and visibility to ensure the Center improves IT as a
strategic asset. GSFC’s IT governance policy incorporates the five essential elements
of effective governance:
- Strategic
Alignment – link the mission to IT investment
- Value
Delivery – ensure the IT investment delivers the benefits promised
- Resource
Management – project management personified
- Risk
Management – establish a formal framework for analysis and management of all risks
- Performance
Measures – are we meeting the business goals, how are we reporting them, and how often are we measuring them?
Center-wide IT investments, programmatic and institutional, will
be reviewed by governance boards composed of representatives from every organization,
creating a governance structure for IT that is truly federated, providing a strong
balance between organizational and enterprise innovation.
So how do we successfully embrace the structures and
processes of IT governance? We align our IT strategy with the missions of GSFC to
deliver maximum value, and establish a solid portfolio management system that
integrates and lays out project components to clearly identify how assigned
resources lead to the accomplishment of our goals. This structured policy, along with senior management governance boards, combines to ensure that IT investments are synchronized with the
missions of the various organizations, thereby providing full value to GSFC.
Using IT as a strategic asset must be one of the central tenets of the Center going forward, and IT governance is one of the mechanisms that
will propel GSFC forward as we take on the new and exciting challenges and opportunities
that we will face over the next 5 years.
I welcome your feedback.
-
The Changing Role of the Chief Information Officer
-
Posted on Aug 30, 2011 09:51:29 AM | Adrian Gardner
-
If
there’s one thing that’s certain in today’s changing economic and technological
climate, it’s that “business as usual” is a thing of the past. To remain at the
forefront of scientific discovery within increasing fiscal constraints, we are
compelled as an Agency to do things differently.
On August 8, 2011, the White House released a pivotal memorandum regarding Chief
Information Officer Authorities (OMB M-11-29) that underscores this point. The
memo explicitly redirects the authority and responsibility of Agency CIOs away
from just policymaking and infrastructure maintenance, to encompass true
portfolio management for all information technology (IT) investments. The
memorandum explains that the shift is intended to eliminate the barriers that
get in the way of effective management of Federal IT programs.
The memo (http://www.whitehouse.gov/blog/2011/08/08/changing-role-federal-chief-information-officers) now directs
Agency CIOs to take a lead role in four main areas:
1. Governance
· Drive the
investment review process for the entire IT portfolio of an Agency
· Lead “TechStat”
sessions intended to improve line-of-sight between project teams and senior
executives
· Terminate or turn
around one-third of all underperforming IT investments by June of 2012
2. Commodity IT
· Eliminate
duplication and rationalize IT investments
· Pool the agency’s
purchasing power across the entire organization to drive down costs and improve
service for commodity IT
· Show a preference
for using shared services instead of standing up separate independent services
3. Program Management
· Identify, recruit,
and hire top IT program management talent
· Take
accountability for the performance of IT program managers
4. Information Security
· Implement an
agency-wide security program
· Implement
continuous monitoring and standardized risk assessment processes supported by
“CyberStat” sessions
As
I read the memorandum, I was struck by the timing—the Center is positioned to
take advantage of these very improvements. We need to begin leveraging IT as a
strategic asset for growth, which can occur only if leadership has visibility
and influence into the broad scope of our IT investments (e.g., infrastructure,
mission, and corporate). If we pool our resources, reduce duplication, create
strategic partnerships, and leverage our IT investments in a strategic manner,
we can provide increased capabilities to the Center at a competitive or reduced
price point—in other words, we can position the Center “to remain
competitive and thrive in any budget climate.”
The role of the CIO is changing to encompass more visibility and responsibility
for managing a broader spectrum of IT. I look forward to partnering and
collaborating with our customers across the Center as we develop and execute a
strategy for IT that enables the mission of the Center and Agency.
First steps for
Goddard
As I review the steps Goddard will need to
take to comply with the new guidelines, I recognize the magnitude of the
change. As the Center CIO, I must work with Center stakeholders and take a hard
look at our current and future IT requirements to outline a technical plan that
will position our Center to be as competitive as possible to achieve long-term
sustainability and mission growth. Our workforce must be part of the
solution. Therefore, we must
develop an aligned, skilled, and agile IT workforce equipped to achieve service
and operational excellence in this competitive and dynamic environment.
Effectively using IT as a strategic asset will
require partnering in ways we have never partnered before. I will need to reach
out to many of you to establish stronger relationships in order to truly
understand how I can enable your mission requirements through IT. I ask for
your patience and help as I begin this effort.
While ideological battles over large-scale
fiscal reform play out in the media every day, we have a very real need to enable
the NASA mission through information technology. We must improve IT service quality, lower costs, streamline IT governance practices, and deliver secure, scalable, and efficient IT
services, products, and operations. These are common goals to achieve a common
good that the entire Center can strive for!
In conclusion I ask the following questions:
Does the challenging budget climate offer opportunities for the Center to come
together? Are there opportunities to share services, capabilities, and assets?
Are there opportunities to lower costs and spur growth using innovation models,
such as the “open-source methodology?”
In my opinion, the answer is a resounding “yes.” We must encourage,
embrace, and accept new ideas and manage the associated risks in order to solve
today’s problems and anticipate tomorrow’s challenges.
-
Cloud Computing: A Game-Changing Business Strategy
-
Posted on Jul 21, 2011 08:11:59 AM | Adrian Gardner
-
On June
22, 2011, I presented at the Third Annual Cloud Computing World Forum in London. An
estimated 5,000 attendees, over 120 speakers, and 200 private and public
vendors participated in this forum that featured all of the key players within
the Cloud Computing (CC) and Software as a Service (SaaS) industries. I enjoyed the opportunity to represent our
Agency at the forum, and I presented “Cloud as a Game-Changing Business Strategy” with the conviction that, used effectively, cloud computing can reduce the cost and schedule of projects across their life cycles.
As a highly technical government agency, NASA leans heavily on its computing capabilities. Utilizing the most efficient and effective approaches for storage, processing, and bandwidth is imperative to ensure our continued success.
Mission-based
projects within our agency often have very long IT life cycles, and they are usually one-of-a-kind, complex, high-stakes endeavors. Decisions
for missions begin early in the strategic planning phase. Due to long lead times for procurement
of hardware, IT acquisitions also happen early in the life cycle. Next, because of the long life cycle, upgrades
need to occur regularly; otherwise, what was new when the planning began will
be obsolete by the actual launch date. In addition, the necessary compute must be estimated far in
advance of when it will actually be needed, which can hinder the
interoperability of a project’s components. Another important aspect is that duplicate IT environments are sometimes created across the variety of projects (e.g., duplicative development environments, integration environments, and test environments), which also results in the duplication of software, hardware, and licenses. Furthermore, additional certifications are necessary for any
duplicative environment. These are
obvious drawbacks of our current system.
However, CC, when strategically used, offers solutions to many of these
scenarios. With the availability
of on-demand access to compute and incredible scalability (elasticity), there is no longer a need to estimate capacity far in advance, maintain duplicative environments, or forgo the sharing of expensive licenses, hardware, and software.
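As a small illustration of what elasticity buys us, the sketch below sizes a worker pool to the current backlog instead of estimating capacity years in advance. The scaling rule and limits are invented for illustration.

```python
# Toy illustration of elasticity: size the worker pool to the current backlog
# instead of estimating capacity years in advance. The scaling rule and limits
# below are invented for illustration.
import math


def workers_needed(backlog_jobs: int, jobs_per_worker_hour: int = 20,
                   target_hours: float = 1.0, max_workers: int = 64) -> int:
    """Pick just enough workers to clear the backlog within the target window."""
    needed = math.ceil(backlog_jobs / (jobs_per_worker_hour * target_hours))
    return max(1, min(needed, max_workers))


for backlog in (5, 300, 5000):
    print(f"{backlog} queued jobs -> {workers_needed(backlog)} workers")
```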
The above
said, CC is not a one-size-fits-all solution, because some systems (e.g.,
embedded systems) do not fall within CC’s use cases. For example, there is some risk involved when dealing with highly
sensitive or complex missions. Also,
there are other obstacles that need to be considered carefully before proceeding, such as the introduction of possible complications to a project or potential security risks.
Additionally, latency can become
an issue once data and applications become distributed and certain spectrums of
compute (e.g., tightly coupled supercomputing) are needed. Lastly, cultural resistance to change
(if disruptive enough to a particular organization) can be a good reason to
hold off on a rapid migration to CC.
In order
for an organization to utilize CC as a game-changing business strategy, it
should be implemented strategically. First, determine where CC can bring the most value to the
organization; next, gauge the levels of potential willingness for adoption. Finally, choose an appropriate approach
for execution: an enterprise-wide
or a project-by-project approach. The
enterprise approach was discussed above, so I will address the project-centric
approach next.
For certain organizations, as discussed
above, it may not be feasible to implement CC on an enterprise-wide basis. An alternative then is to use CC on a
project-by-project basis. This approach works well for six types of projects:
1. Projects that need the latest IT on a “just-in-time” basis, for example, system or mission development, or construction or renovation of buildings.
2. Projects that need stop-gap capability, such as transferring information, consolidating servers, or reorganizing a data center.
3. Projects facing risk from business change, such as uncertain funding, changes in demand, or significant changes to IT.
4. Missions facing challenging funding, such as cyclical demands or economic downturns.
5. New projects that can start in CC until their scope is understood, adding compute only as it is needed.
6. Proofs of concept, for example, scoping network, workstation, or server performance.
In summary,
CC should be embedded in the project-management process to ensure business success, risk mitigation, significant cost savings,
a better understanding of compute requirements, and the ability to obtain the
latest technologies when buying new hardware. If executed properly, organizations will be able to utilize CC
as a game-changing business strategy.
-
NASA is Answering the Call
-
Posted on May 25, 2011 12:02:06 PM | Adrian Gardner
-
Today,
more than ever, is the ideal time for the Federal Government to focus on
reducing operating inefficiencies and the costs of IT investments rather than continuing to spend over $80 billion per year as it has done in the past. One alternative, which Federal Chief Information Officer Vivek Kundra strives to implement in his 25-point plan for IT reform, is a “Cloud First” policy. This policy requires that agencies default to cloud-based solutions whenever a secure, reliable, cost-effective cloud option exists. This will ultimately reduce operating inefficiencies and improve overall service delivery for Federal agencies.
In December 2010, NASA’s Ames Research Center and Goddard
Space Flight Center collaborated to spearhead efforts to provide cloud
computing as a viable option to NASA scientists and engineers.
NASA’s
computing strategy involves various technologies, including Cloud Computing
(CC). Currently, NASA has invested in the OpenStack software for CC and
is working to operationalize a specific version of this software, which NASA
calls Nebula. The Nebula CC software will provide NASA personnel who need a private CC solution with IaaS (Infrastructure as a Service); later implementations
of Nebula will offer PaaS (Platform as a Service).
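To give a flavor of what IaaS looks like from the user’s side, here is a hypothetical sketch of requesting a virtual machine from an OpenStack-based cloud using the openstacksdk Python client. The cloud name, image, flavor, and network below are placeholders, and Nebula’s actual interfaces and tooling may differ from this modern client.

```python
# Hypothetical sketch of requesting IaaS from an OpenStack-based cloud such as
# Nebula, using the openstacksdk client. The cloud, image, flavor, and network
# names are placeholders; Nebula's actual interface and naming may differ.
import openstack

# Credentials and endpoints come from a clouds.yaml entry named "nebula" (assumed).
conn = openstack.connect(cloud="nebula")

image = conn.compute.find_image("ubuntu-22.04")     # placeholder image name
flavor = conn.compute.find_flavor("m1.medium")      # placeholder flavor name
network = conn.network.find_network("project-net")  # placeholder network name

# Request a single virtual machine and wait until it is active.
server = conn.compute.create_server(
    name="science-worker-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```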
NASA
is also working on various technologies that support the conservation of energy
(i.e., so-called Green Technologies) and the environment (i.e., so-called Clean
Technologies) such as containerized computing, which offers energy efficiency,
a compact footprint, dense computing power, and reduced demand for computing facilities within buildings. A few other technologies that
NASA has as part of its computing strategy are:
·
Thin-client, zero-client, and remote thick-client technologies;
·
Data-center consolidations;
·
Server consolidations via virtual computing environments (VCEs)
and CC environments (CCEs);
·
Implementing Green Technologies and Clean Technologies in new
building endeavors as well as renovations of existing buildings;
·
Computing-environment consolidations;
·
Smart manufacturing environments that provide improved
capabilities, availability, safety, reliability, and agility; and
·
An IT Storefront concept, which provides a mechanism for
allowing customers and users to specify their needs in their own terms and lets the underlying storefront software select a best-practice solution. This may be a CC solution, a mobility
solution, a thick- or thin-client solution, or a combination of these or
something else.
Utilization of these technologies will not only help NASA’s personnel focus more on their missions and less on computing infrastructure, but it will also help the Agency accrue benefits from innovation, improve resource utilization, and ultimately lead to the sustainability that is critical in our current fiscal environment.
In the upcoming months, I'm excited to see how CC will play a role in our science here at Goddard Space Flight Center. Eventually, the phrase “take it to the Cloud” will be commonly heard among our scientists and engineers.